
    Stealthy Opaque Predicates in Hardware -- Obfuscating Constant Expressions at Negligible Overhead

    Opaque predicates are a well-established fundamental building block for software obfuscation. Simplified, an opaque predicate implements an expression that yields a constant Boolean output but appears to exhibit dynamic behavior under static analysis. Even though there has been extensive research regarding opaque predicates in software, techniques for opaque predicates in hardware are barely explored. In this work, we propose a novel technique to instantiate opaque predicates in hardware, such that they (1) are resource-efficient, and (2) are challenging to reverse engineer even with dynamic analysis capabilities. We demonstrate the applicability of opaque predicates in hardware both for the protection of intellectual property and for the obfuscation of cryptographic hardware Trojans. Our results show that we are able to implement stealthy opaque predicates in hardware with minimal overhead in area and no impact on latency.
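
The paper's circuit-level construction is not reproduced in the abstract, but the underlying idea is easiest to see in a software analogue. Below is a minimal sketch of a classic opaque predicate; it illustrates only the general concept, not the hardware technique the paper proposes:

```python
def opaquely_true(x: int) -> bool:
    # x*x + x = x*(x+1) is a product of two consecutive integers, so it is
    # always even. The predicate is therefore constant True for every integer
    # x, yet a static analyzer that cannot prove the algebraic identity sees
    # what looks like a data-dependent branch condition.
    return (x * x + x) % 2 == 0

def protected_branch(x: int) -> str:
    if opaquely_true(x):           # always taken at run time
        return "real code path"
    return "dead decoy path"       # never executed, but appears reachable

if __name__ == "__main__":
    assert all(opaquely_true(x) for x in range(-1000, 1000))
    print(protected_branch(42))
```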

    Evil from Within: Machine Learning Backdoors through Hardware Trojans

    Backdoors pose a serious threat to machine learning, as they can compromise the integrity of security-critical systems, such as self-driving cars. While different defenses have been proposed to address this threat, they all rely on the assumption that the hardware on which the learning models are executed during inference is trusted. In this paper, we challenge this assumption and introduce a backdoor attack that completely resides within a common hardware accelerator for machine learning. Outside of the accelerator, neither the learning model nor the software is manipulated, so that current defenses fail. To make this attack practical, we overcome two challenges: First, as memory on a hardware accelerator is severely limited, we introduce the concept of a minimal backdoor that deviates as little as possible from the original model and is activated by replacing only a few model parameters. Second, we develop a configurable hardware Trojan that can be provisioned with the backdoor and performs the replacement only when the specific target model is processed. We demonstrate the practical feasibility of our attack by implanting our hardware Trojan into the Xilinx Vitis AI DPU, a commercial machine-learning accelerator. We configure the Trojan with a minimal backdoor for a traffic-sign recognition system. The backdoor replaces only 30 (0.069%) model parameters, yet it reliably manipulates the recognition once the input contains a backdoor trigger. Our attack expands the hardware circuit of the accelerator by 0.24% and induces no run-time overhead, rendering detection hardly possible. Given the complex and highly distributed manufacturing process of current hardware, our work points to a new threat in machine learning that is inaccessible to current security mechanisms and calls for hardware to be manufactured only in fully trusted environments.
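
The "minimal backdoor" concept lends itself to a short sketch. The fingerprinting scheme, patch format, and sizes below are illustrative assumptions, not the mechanism implanted in the Vitis AI DPU, which performs the replacement in hardware inside the accelerator:

```python
import hashlib
import numpy as np

def fingerprint(weights: np.ndarray) -> str:
    """Identify a model by a hash of its raw parameter bytes (an assumption;
    the paper's hardware Trojan recognizes its target model differently)."""
    return hashlib.sha256(weights.tobytes()).hexdigest()

def apply_minimal_backdoor(weights, target_fp, patch):
    """Replace a few parameters, but only for the recognized target model."""
    if fingerprint(weights) != target_fp:
        return weights             # every other model passes through untouched
    patched = weights.copy()
    for idx, val in patch.items():
        patched.flat[idx] = val    # minimal deviation: a handful of weights
    return patched

# Toy demo: a "model" of 43,000 parameters, 30 of which get replaced,
# mirroring the ~0.069% fraction reported in the abstract.
rng = np.random.default_rng(0)
model = rng.standard_normal(43_000).astype(np.float32)
patch = {int(i): 0.0 for i in rng.choice(model.size, size=30, replace=False)}
backdoored = apply_minimal_backdoor(model, fingerprint(model), patch)
print(np.count_nonzero(backdoored != model))   # -> 30
```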

    Collision Timing Attack when Breaking 42 AES ASIC Cores

    A collision timing attack that exploits the data-dependent timing characteristics of combinational circuits is demonstrated. The attack is based on the correlation collision attack presented at CHES 2010 and exploits the timing attributes of combinational circuits implementing complex functions, e.g., S-boxes, in hardware, with the help of the scheme used in another CHES 2010 paper, namely fault sensitivity analysis. Like other side-channel collision attacks, our approach avoids the need for a hypothetical model to recover the secret materials. Results of attacking the 14 AES ASIC cores on each of the SASEBO LSI chips, fabricated in three different process technologies (130nm, 90nm, and 65nm), i.e., 42 cores in total, are presented. Successfully breaking the DPA-protected and the fault-attack-protected cores demonstrates the strength of the attack.
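
The key-recovery step of a correlation collision attack can be sketched abstractly. The 256-entry timing profiles and the simulated delay model below are assumptions made for illustration; in the paper, the profiles come from measurements of the actual chips via the fault-sensitivity-style setup:

```python
import numpy as np

def recover_key_delta(t1: np.ndarray, t2: np.ndarray) -> int:
    """Correlation collision on timing profiles.

    t1[p], t2[p]: mean combinational delay observed when S-box instances
    1 and 2 process input byte p (averaged over many measurements). For
    the correct guess d = k1 XOR k2, instance 2 on input p XOR d computes
    the same S-box value as instance 1 on p, so the data-dependent delays
    collide and the two profiles correlate. No hypothetical leakage model
    of the S-box itself is needed.
    """
    best_d, best_corr = 0, -1.0
    for d in range(256):
        shifted = t2[np.arange(256) ^ d]          # re-index profile 2 by p ^ d
        corr = np.corrcoef(t1, shifted)[0, 1]
        if corr > best_corr:
            best_d, best_corr = d, corr
    return best_d                                  # recovers k1 XOR k2

# Toy demo with simulated data-dependent delays (an assumption; real
# measurements would replace this). Delay here depends on the Hamming
# weight of the S-box output.
SBOX = np.random.default_rng(1).permutation(256)   # stand-in S-box
k1, k2 = 0x3A, 0xC5
delay = lambda v: bin(int(v)).count("1")
t1 = np.array([delay(SBOX[p ^ k1]) for p in range(256)], dtype=float)
t2 = np.array([delay(SBOX[p ^ k2]) for p in range(256)], dtype=float)
print(hex(recover_key_delta(t1, t2)), hex(k1 ^ k2))  # both print 0xff
```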